If you build or deploy AI systems that screen job candidates, rank CVs, score interviews, monitor employee performance, or determine which gig workers receive task assignments, EU AI Act Annex III Point 4 almost certainly classifies your system as high-risk, triggering the full Chapter III obligation stack before the August 2026 general application deadline.
The compliance gap is acute and largely unaddressed. Personio, Germany's largest HR-tech company with over 10,000 DACH customers, has not published a conformity assessment or EU AI Act compliance statement for its AI Candidate Matching and AI Interview Scoring features. Workday, dominant in large European enterprises, is a US entity with direct CLOUD Act exposure on employee screening data, yet European HR teams continue to treat its ATS AI as a standard SaaS tool rather than as a high-risk AI system under EU law.
A second compliance layer goes ignored almost entirely: for any AI system that monitors or evaluates employee performance, German employers face works council co-determination obligations under §87(1) Nr.6 BetrVG that run parallel to EU AI Act requirements, creating a double-compliance burden that neither the AI Act nor most HR-tech vendors have publicly addressed.
What Annex III Point 4 Actually Covers
Annex III Point 4 of the EU AI Act applies to three distinct categories of AI systems in employment, workers management, and access to self-employment:
(a) Recruitment and selection: AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests. This covers ATS (Applicant Tracking System) AI, CV ranking algorithms, job board recommendation engines that target specific candidates, automated video interview analysis, and any AI that filters or scores candidates in the hiring pipeline.
(b) Promotion and termination decisions: AI systems intended to be used for making decisions affecting terms and conditions of work, including for the promotion and termination of employment relationships, and for the allocation of tasks and monitoring of performance. This is the broadest category — it encompasses performance management AI, productivity monitoring platforms, AI-based task allocation systems, and any AI system whose output informs decisions about employee career progression or continued employment.
(c) Access to self-employment: AI systems intended to be used for evaluating and classifying persons for access to self-employment contracting via platforms. This covers gig economy platform AI — ride-hailing dispatch algorithms, food delivery task allocation AI, freelance marketplace ranking engines, and any AI system that determines whether a self-employed person receives work opportunities through a digital platform.
The scope is deliberately broad. The "employment relationships" formulation covers both employment and self-employment. The "allocation of tasks" element captures gig platforms regardless of whether the worker is classified as an employee. The "advertising vacancies" element captures job board AI that targets specific candidate profiles — meaning the high-risk classification can apply upstream of the application process itself.
The Three High-Risk Employment AI Categories in Practice
Category A — Recruitment and Selection AI
The clearest Annex III Point 4 cases are ATS systems with integrated AI ranking or filtering capabilities. Most enterprise HR software now includes AI screening features — the compliance question is whether those features trigger the high-risk classification.
| AI System | High-Risk? | Reason |
|---|---|---|
| ATS AI ranking CVs by predicted job fit score | HIGH-RISK | Filtering applications, screening candidates |
| Automated video interview analysis scoring communication skills | HIGH-RISK | Evaluating candidates in interview process |
| LinkedIn Recruiter AI ranking candidates for HR outreach | HIGH-RISK (probable) | Filtering and screening candidates for recruitment pipeline |
| Job board AI targeting specific candidate profiles with vacancy ads | HIGH-RISK | Advertising vacancies to screened candidate subset |
| AI pre-screening chatbot eliminating candidates based on screening questions | HIGH-RISK | Filtering applications before human review |
| Personio AI Candidate Matching for open positions | HIGH-RISK (probable) | Screening and filtering candidates in hiring pipeline |
| Rule-based keyword filter (no ML, no scoring) | NOT HIGH-RISK | Not an AI system under EU AI Act definition |
| Generic job recommendation engine showing open roles to job seekers | Context-dependent | High-risk if employer-facing screening; lower risk if candidate-facing discovery |
| Spell-checker in job application portal | NOT HIGH-RISK | Productivity tool, not candidate evaluation |
The critical threshold for Category A is candidate-facing filtering or scoring authority: an AI system that determines which candidates an employer sees or considers, or that scores candidates in a way that effectively decides who advances in the hiring process, is high-risk regardless of whether a human technically makes the final decision.
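The threshold can be reduced to a predicate (an illustrative sketch; the parameter names are ours, not terms from the Act):

```python
def is_point_4a_high_risk(determines_candidate_visibility: bool,
                          scores_candidate_advancement: bool) -> bool:
    """Illustrative Point 4(a) threshold: an AI with filtering or scoring
    authority over candidates is high-risk. A nominal human final decision
    does not change the outcome, so it is deliberately not a parameter."""
    return determines_candidate_visibility or scores_candidate_advancement

# An ATS that ranks CVs and surfaces only the top slice is high-risk
# even when a recruiter clicks the final "reject" button:
assert is_point_4a_high_risk(True, False)
```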
Category B — Performance Monitoring and Termination AI
Category B is the most commercially sensitive classification because it covers the AI features embedded in widely deployed enterprise HR platforms — performance review systems, productivity monitoring tools, and task management platforms. Most large European employers are running at least one system that touches this category.
| AI System | High-Risk? | Reason |
|---|---|---|
| AI performance review system scoring employees on quarterly objectives | HIGH-RISK | Monitoring and evaluating performance for promotion decisions |
| Productivity monitoring AI scoring remote workers by activity metrics | HIGH-RISK | Monitoring performance, informs termination |
| Task allocation AI determining which employees receive high-visibility assignments | HIGH-RISK | Allocation of tasks affecting career trajectory |
| AI system monitoring employee behaviour during Probezeit to inform termination | HIGH-RISK | Performance monitoring affecting employment termination |
| Workday Talent Optimization AI ranking employee flight-risk and performance | HIGH-RISK | Monitoring/evaluating performance for promotion/termination |
| AI scheduling tool assigning shifts based on availability preferences | Context-dependent | Not high-risk if preference-driven; high-risk if performance-scoring factors included |
| Employee engagement survey AI detecting sentiment patterns | NOT HIGH-RISK | Advisory analytics, no direct decision authority |
| AI suggesting training content based on skill gaps | NOT HIGH-RISK | Development tool, not performance evaluation |
The Probezeit Edge Case: German employment law creates a specific high-risk scenario. During the Kündigungsschutz-free Probezeit (typically the first 6 months of employment), employers can terminate with 2-week notice and limited justification. Any AI system that monitors employee behaviour during this period and whose output informs the termination decision — email responsiveness analytics, meeting attendance AI, output tracking — is HIGH-RISK under Annex III Point 4(b). The combination of high AI influence and low legal protection for the affected employee creates exactly the risk profile the EU AI Act targets.
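A minimal sketch of this risk profile, with field names of our own invention (the 6-month window reflects the German rules described above):

```python
from dataclasses import dataclass

@dataclass
class ProbezeitMonitoringCheck:
    """Sketch of the Probezeit edge case; field names are our own."""
    months_since_hire: int
    ai_monitors_behaviour: bool           # e.g. email responsiveness, attendance analytics
    ai_output_informs_termination: bool

    def is_annex_iii_point_4b(self) -> bool:
        # Monitoring whose output informs termination is 4(b) high-risk at any tenure.
        return self.ai_monitors_behaviour and self.ai_output_informs_termination

    def in_low_protection_window(self) -> bool:
        # KSchG protection typically begins only after the 6-month Wartezeit;
        # inside it, high AI influence meets low legal protection.
        return self.months_since_hire < 6
```

The combination the Act targets is both methods returning true at once: high-risk monitoring during the window where the employee has the least recourse.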
Category C — Gig Economy and Self-Employment Platform AI
Category C creates high-risk classification for the AI systems at the core of platform economy business models. Task allocation algorithms, dynamic pricing engines, and account deactivation AI at Uber, Deliveroo, Gorillas, Flink, Fiverr, and similar platforms all fall under Annex III Point 4(c).
European courts have been ahead of regulators here: the Uber Spain Supreme Court ruling (2020), Gorillas Netherlands court decisions (2022), and multiple French Cour de Cassation rulings have established that platform workers can have employment status despite nominal contractor classification. The EU AI Act adds a compliance layer that applies regardless of worker classification — the high-risk obligations attach to the AI system itself, not to the employment relationship status.
| AI System | High-Risk? | Reason |
|---|---|---|
| Ride-hailing dispatch AI allocating trips to drivers | HIGH-RISK | Allocating tasks determining access to self-employment earnings |
| Delivery platform AI dynamically pricing delivery fees and assigning orders | HIGH-RISK | Task allocation determining self-employed worker earnings |
| Freelance marketplace AI ranking contractor profiles in search results | HIGH-RISK | Evaluating and classifying persons for self-employment access |
| Platform AI account deactivation triggered by automated fraud scoring | HIGH-RISK | Classification affecting access to self-employment |
| Gig platform AI setting dynamic hourly rates offered to contractors | HIGH-RISK | Terms and conditions of self-employment access |
| Uber Eats order assignment algorithm | HIGH-RISK | Task allocation for self-employed delivery workers |
| Social media algorithm showing freelancer's portfolio to potential clients | NOT HIGH-RISK | Discovery feature, not employment classification |
The Personio Compliance Gap
Personio GmbH (Munich) is Germany's largest HR software company and one of Europe's highest-valued HR-tech scale-ups. It serves over 10,000 businesses — predominantly German Mittelstand and DACH enterprise — and has raised over €700 million in venture funding. Its AI features are central to its product roadmap and competitive positioning.
The Annex III Point 4 problem is not theoretical for Personio. Its current product includes:
- AI Candidate Matching: Automatically ranks candidates against job requirements using ML-based fit scoring. This is unambiguously within Annex III Point 4(a) — it screens and filters applicants in the recruitment pipeline.
- AI Job Description Generator: Uses AI to generate job postings targeting specific candidate profiles. Where the targeting criteria include candidate selection logic, this can fall within the "advertising vacancies" element of Point 4(a).
- AI-Powered Performance Reviews: Performance management features that assist in structuring and scoring employee reviews. Where these outputs inform promotion decisions, they fall within Point 4(b).
As of the date of this article, Personio has not published a conformity assessment for these AI features, has not announced EU Database registration plans, and has not disclosed an EU AI Act compliance timeline. German Mittelstand companies using Personio's AI features as deployers carry their own obligations under Art.26, including ensuring that their human oversight arrangements satisfy Art.14, plus a fundamental rights impact assessment under Art.27 where that article applies to them. These duties must be fulfilled independently of whatever Personio (the provider) eventually publishes.
The practical question for Personio customers is: when Personio ships a conformity assessment and CE marking for its AI Candidate Matching, will you receive the technical documentation and AI system card you need to complete your own deployer obligations? Most mid-size companies using Personio have not considered this dependency.
Workday and the CLOUD Act Employment Data Problem
Workday, Inc. (Pleasanton, California) is, alongside SAP SuccessFactors, one of the two dominant enterprise HR platforms in large European corporations. Workday's AI features (Workday Talent Optimization, Workday Recruiting AI, and the contingent workforce management product acquired with VNDLY) are embedded in the HR operations of thousands of European employers.
The Annex III Point 4 compliance problem for European Workday customers has two dimensions:
Dimension 1 — CLOUD Act exposure: Workday is a US entity. Employee CV data, interview notes, performance reviews, promotion decisions, and termination documentation processed through Workday's AI features are accessible to US authorities under the CLOUD Act regardless of whether processing occurs on EU servers. GDPR Art.48 provides no valid transfer basis for CLOUD Act compelled disclosure — the conflict between US law and GDPR is unresolved. For high-risk employment AI, this creates a situation where the AI's training and operation data for European employees is potentially accessible to US government requests.
Dimension 2 — Bias litigation inheritance: In 2023, Workday faced a federal class-action lawsuit in California (Derek Mobley v. Workday, Inc.) alleging that its ATS AI discriminated against Black, disabled, and over-40 candidates across multiple employer clients simultaneously. The litigation alleged that Workday's AI screening tools — not individual employer decisions — were the proximate cause of systematic discrimination.
European employers using Workday ATS cannot assume that a product developed and litigated under US anti-discrimination law satisfies EU AI Act Art.9 and Art.10 requirements. Art.9 requires risk management throughout the AI system lifecycle including monitoring for discriminatory outcomes. Art.10 requires that training data be representative, free from errors, and subject to data governance practices that address bias. European deployers need their own conformity assessments — relying on Workday documentation designed for US customers is legally insufficient.
The comparison with SAP SuccessFactors is instructive: SAP SE is a German entity (Walldorf), which avoids direct CLOUD Act exposure at the parent-entity level, although US subsidiaries can still create indirect exposure. SAP's AI features for SuccessFactors are subject to EU AI Act obligations, but at least the regulatory exposure sits within European jurisdiction. The direct CLOUD Act problem is specific to Workday, Oracle HCM, and other US-headquartered platforms.
LinkedIn Recruiter AI: Microsoft's High-Risk Classification Problem
LinkedIn (Microsoft Ireland Operations Ltd, Dublin) occupies a unique position in Annex III Point 4 analysis. LinkedIn Recruiter — the enterprise product used by HR teams to search, filter, and contact candidates — uses AI to rank candidates in recruiter search results and recommend profiles. LinkedIn Talent Insights uses AI to analyse candidate pools and predict talent availability.
The high-risk classification question turns on whether LinkedIn Recruiter AI's ranking and filtering of candidate profiles for employer outreach constitutes "screening or filtering applications" under Point 4(a). The argument that it does is strong: when an HR manager searches for candidates and LinkedIn's AI determines which 20 profiles appear in the first page of results, that AI decision is functionally indistinguishable from an ATS filtering CVs. The candidate whose profile is ranked lower does not reach the recruiter's consideration — the AI effectively screens them out.
Microsoft has not published an EU AI Act conformity assessment for LinkedIn Recruiter, has not announced planned AI Database registration for LinkedIn's HR AI features, and has not published a conformity schedule. This is significant given Microsoft's public EU AI Act compliance commitments for other products (Azure AI services, Copilot for Microsoft 365).
European companies using LinkedIn Recruiter should document their human oversight arrangements — the degree to which HR staff independently search beyond AI-recommended profiles — as part of their own deployer obligations under Art.26. Where LinkedIn's AI ranking is the de facto shortlist, the deployer's human oversight is nominal, and the deployer bears the compliance gap risk.
The §87(1) BetrVG German Compliance Layer
For German employers, EU AI Act Annex III Point 4 compliance operates above a baseline of Works Council (Betriebsrat) co-determination rights that already require approval for employment monitoring AI.
§87(1) Nr.6 BetrVG (Betriebsverfassungsgesetz — Works Constitution Act) gives the Betriebsrat co-determination rights over the "introduction and use of technical devices designed to monitor the behaviour or performance of employees." The Federal Labour Court (BAG) has interpreted this broadly: any technical system capable of systematically recording employee behaviour data — including AI performance monitoring, productivity tracking, and email analytics — requires Works Council agreement through a Betriebsvereinbarung (works agreement) before deployment.
The double-compliance structure for German employers looks like this:
| Obligation | EU AI Act | BetrVG §87(1) Nr.6 |
|---|---|---|
| Risk assessment before deployment | Art.9 risk management | Betriebsrat consultation and agreement |
| Human oversight documentation | Art.14 human oversight measures | Betriebsvereinbarung defines AI decision boundaries |
| Transparency to affected persons | Art.13 transparency requirements | Betriebsrat can require employee notification |
| Ongoing monitoring for bias/errors | Art.9 lifecycle risk management; Art.72 post-market monitoring | BAG requires modification agreement for system changes |
| Right to contest AI decisions | Art.26 deployer obligations | BetrVG §83 file access, §85 complaint right |
In practice, this means German HR teams deploying any AI performance monitoring system face a two-track approval process: EU AI Act conformity assessment for the system (with appropriate Art.14 human oversight measures), and Betriebsvereinbarung negotiation with the Works Council covering the same system. These processes are not coordinated — HR vendors rarely provide Betriebsvereinbarung-ready documentation alongside conformity assessments, leaving German employers to bridge the gap themselves.
The practical implication: German employers should negotiate their Betriebsvereinbarung terms before finalising AI vendor contracts, because Works Council rejection or modification of approved terms can effectively block deployment of a system that has already passed conformity assessment.
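One way to operationalise the two-track process is a deployment gate that refuses go-live until both tracks clear (a sketch; the field names are our own, not statutory terms):

```python
from dataclasses import dataclass

@dataclass
class DeploymentGate:
    """Go-live gate for AI performance monitoring in a German workplace.

    A sketch: both tracks (EU AI Act conformity and BetrVG co-determination)
    must clear before deployment."""
    conformity_assessment_complete: bool              # EU AI Act track
    eu_database_registered: bool                      # EU AI Act track
    betriebsvereinbarung_signed: bool                 # BetrVG track (Works Council agreement)
    betriebsvereinbarung_covers_system_changes: bool  # BAG: system changes need re-agreement

    def blockers(self) -> list[str]:
        issues = []
        if not self.conformity_assessment_complete:
            issues.append("conformity assessment outstanding")
        if not self.eu_database_registered:
            issues.append("EU AI Database registration outstanding")
        if not self.betriebsvereinbarung_signed:
            issues.append("no Betriebsvereinbarung: §87(1) Nr.6 BetrVG blocks deployment")
        if not self.betriebsvereinbarung_covers_system_changes:
            issues.append("works agreement lacks a change clause: model updates need renegotiation")
        return issues

    def may_deploy(self) -> bool:
        return not self.blockers()
```

A system can pass conformity assessment and still be blocked: `DeploymentGate(True, True, False, False).may_deploy()` is `False`, which is exactly the scenario the negotiate-first advice above is meant to avoid.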
Python Example: An Employment AI Compliance Classifier
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class AnnexIIIPoint4Category(Enum):
    RECRUITMENT_SELECTION = "4a"    # Advertising, screening, evaluating candidates
    PERFORMANCE_MONITORING = "4b"   # Promotion, termination, task allocation, monitoring
    SELF_EMPLOYMENT_ACCESS = "4c"   # Gig platform task allocation, account classification
    NOT_HIGH_RISK = "NOT_HIGH_RISK"


class CloudActExposure(Enum):
    HIGH = "HIGH"      # US entity, direct CLOUD Act compelled disclosure risk
    MEDIUM = "MEDIUM"  # EU entity but US parent or mixed infrastructure
    LOW = "LOW"        # EU sovereign, no US jurisdiction


@dataclass
class EmploymentAIClassification:
    system_name: str
    category: AnnexIIIPoint4Category
    cloud_act_exposure: CloudActExposure
    betrvg_87_1_applies: bool  # German Works Council co-determination required
    rationale: str
    obligations: list[str] = field(default_factory=list)
    not_high_risk_reason: Optional[str] = None


def classify_employment_ai(
    system_name: str,
    screens_filters_candidates: bool,
    scores_interviews_candidates: bool,
    targets_vacancy_ads_to_candidates: bool,
    monitors_employee_performance: bool,
    informs_promotion_termination: bool,
    allocates_tasks_to_employees: bool,
    allocates_tasks_to_gig_workers: bool,
    classifies_gig_worker_account_status: bool,
    entity_jurisdiction: str,  # "us", "eu_sovereign", "eu_us_parent"
    is_in_germany: bool = False,
) -> EmploymentAIClassification:
    # Determine the high-risk category. Note the elif chain: a system matching
    # several categories is reported under the first match only (4a before 4b
    # before 4c); a fuller model would record every matching category.
    category = AnnexIIIPoint4Category.NOT_HIGH_RISK
    rationale_parts = []
    if (screens_filters_candidates or scores_interviews_candidates
            or targets_vacancy_ads_to_candidates):
        category = AnnexIIIPoint4Category.RECRUITMENT_SELECTION
        if screens_filters_candidates:
            rationale_parts.append("screens or filters job applications")
        if scores_interviews_candidates:
            rationale_parts.append("evaluates candidates during interviews or tests")
        if targets_vacancy_ads_to_candidates:
            rationale_parts.append("targets vacancy advertising to specific candidate profiles")
    elif (monitors_employee_performance or informs_promotion_termination
            or allocates_tasks_to_employees):
        category = AnnexIIIPoint4Category.PERFORMANCE_MONITORING
        if monitors_employee_performance:
            rationale_parts.append("monitors employee behaviour or performance")
        if informs_promotion_termination:
            rationale_parts.append("informs promotion or termination decisions")
        if allocates_tasks_to_employees:
            rationale_parts.append("allocates tasks affecting career trajectory")
    elif allocates_tasks_to_gig_workers or classifies_gig_worker_account_status:
        category = AnnexIIIPoint4Category.SELF_EMPLOYMENT_ACCESS
        if allocates_tasks_to_gig_workers:
            rationale_parts.append("allocates tasks to self-employed platform workers")
        if classifies_gig_worker_account_status:
            rationale_parts.append("classifies or restricts gig worker platform access")

    # CLOUD Act exposure follows entity jurisdiction
    if entity_jurisdiction == "us":
        cloud_act = CloudActExposure.HIGH
    elif entity_jurisdiction == "eu_us_parent":
        cloud_act = CloudActExposure.MEDIUM
    else:
        cloud_act = CloudActExposure.LOW

    # BetrVG §87(1) Nr.6 applies in Germany to performance monitoring + task allocation
    betrvg_applies = is_in_germany and category in (
        AnnexIIIPoint4Category.PERFORMANCE_MONITORING,
        AnnexIIIPoint4Category.SELF_EMPLOYMENT_ACCESS,
    )

    # Build the obligations list
    obligations = []
    if category != AnnexIIIPoint4Category.NOT_HIGH_RISK:
        obligations = [
            "Art.16: Provider conformity assessment before market placement",
            "Art.9: Risk management system including bias testing on protected characteristics",
            "Art.10: Training data governance — representativeness, bias review",
            "Art.13: Transparency — inform affected candidates/employees of AI use",
            "Art.14: Human oversight measures — qualified HR professional review before consequential decisions",
            "Art.15: Accuracy, robustness, cybersecurity requirements",
            "Art.49: Register in the EU AI Database (Art.71) before deployment",
            "Art.27: Deployer fundamental rights impact assessment (covered deployers)",
            "Art.26(7): Inform employees/candidates that they are subject to high-risk AI",
            "Art.26(6): Log AI use in HR decisions for post-hoc accountability",
        ]
    if betrvg_applies:
        obligations.append("BetrVG §87(1) Nr.6: Betriebsvereinbarung required before deployment")
    if cloud_act == CloudActExposure.HIGH:
        obligations.append("GDPR Art.46: Standard Contractual Clauses + supplementary measures for US data transfers")
        obligations.append("GDPR Art.48: No valid transfer basis for CLOUD Act compelled disclosure — document residual risk")

    rationale = (f"System {', '.join(rationale_parts)}." if rationale_parts
                 else "Advisory tool with no consequential decision authority.")
    return EmploymentAIClassification(
        system_name=system_name,
        category=category,
        cloud_act_exposure=cloud_act,
        betrvg_87_1_applies=betrvg_applies,
        rationale=rationale,
        obligations=obligations,
        not_high_risk_reason=rationale if category == AnnexIIIPoint4Category.NOT_HIGH_RISK else None,
    )


# Classification table for 8 representative employment AI systems
employment_ai_systems = [
    classify_employment_ai(
        "Personio AI Candidate Matching",
        screens_filters_candidates=True, scores_interviews_candidates=False,
        targets_vacancy_ads_to_candidates=False, monitors_employee_performance=False,
        informs_promotion_termination=False, allocates_tasks_to_employees=False,
        allocates_tasks_to_gig_workers=False, classifies_gig_worker_account_status=False,
        entity_jurisdiction="eu_sovereign", is_in_germany=True,
    ),
    classify_employment_ai(
        "Workday Recruiting AI (ATS Screening)",
        screens_filters_candidates=True, scores_interviews_candidates=True,
        targets_vacancy_ads_to_candidates=False, monitors_employee_performance=False,
        informs_promotion_termination=False, allocates_tasks_to_employees=False,
        allocates_tasks_to_gig_workers=False, classifies_gig_worker_account_status=False,
        entity_jurisdiction="us", is_in_germany=True,
    ),
    classify_employment_ai(
        "LinkedIn Recruiter AI Candidate Ranking",
        screens_filters_candidates=True, scores_interviews_candidates=False,
        targets_vacancy_ads_to_candidates=True, monitors_employee_performance=False,
        informs_promotion_termination=False, allocates_tasks_to_employees=False,
        allocates_tasks_to_gig_workers=False, classifies_gig_worker_account_status=False,
        entity_jurisdiction="eu_us_parent", is_in_germany=False,
    ),
    classify_employment_ai(
        "Workday Talent Optimization (Performance AI)",
        screens_filters_candidates=False, scores_interviews_candidates=False,
        targets_vacancy_ads_to_candidates=False, monitors_employee_performance=True,
        informs_promotion_termination=True, allocates_tasks_to_employees=False,
        allocates_tasks_to_gig_workers=False, classifies_gig_worker_account_status=False,
        entity_jurisdiction="us", is_in_germany=True,
    ),
    classify_employment_ai(
        "Productivity Monitoring AI (Probezeit tracking)",
        screens_filters_candidates=False, scores_interviews_candidates=False,
        targets_vacancy_ads_to_candidates=False, monitors_employee_performance=True,
        informs_promotion_termination=True, allocates_tasks_to_employees=False,
        allocates_tasks_to_gig_workers=False, classifies_gig_worker_account_status=False,
        entity_jurisdiction="eu_sovereign", is_in_germany=True,
    ),
    classify_employment_ai(
        "Uber Eats Order Assignment Algorithm",
        screens_filters_candidates=False, scores_interviews_candidates=False,
        targets_vacancy_ads_to_candidates=False, monitors_employee_performance=False,
        informs_promotion_termination=False, allocates_tasks_to_employees=False,
        allocates_tasks_to_gig_workers=True, classifies_gig_worker_account_status=True,
        entity_jurisdiction="us", is_in_germany=False,
    ),
    classify_employment_ai(
        "Fiverr Freelancer Ranking Algorithm",
        screens_filters_candidates=False, scores_interviews_candidates=False,
        targets_vacancy_ads_to_candidates=False, monitors_employee_performance=False,
        informs_promotion_termination=False, allocates_tasks_to_employees=False,
        allocates_tasks_to_gig_workers=True, classifies_gig_worker_account_status=True,
        entity_jurisdiction="us", is_in_germany=False,
    ),
    classify_employment_ai(
        "Employee Engagement Survey Analytics",
        screens_filters_candidates=False, scores_interviews_candidates=False,
        targets_vacancy_ads_to_candidates=False, monitors_employee_performance=False,
        informs_promotion_termination=False, allocates_tasks_to_employees=False,
        allocates_tasks_to_gig_workers=False, classifies_gig_worker_account_status=False,
        entity_jurisdiction="eu_sovereign", is_in_germany=True,
    ),
]

for system in employment_ai_systems:
    status = (system.category.value
              if system.category != AnnexIIIPoint4Category.NOT_HIGH_RISK
              else "NOT HIGH-RISK")
    print(f"\n{system.system_name}")
    print(f"  Classification: {status}")
    print(f"  CLOUD Act Exposure: {system.cloud_act_exposure.value}")
    print(f"  BetrVG §87(1) Co-determination: {'YES' if system.betrvg_87_1_applies else 'NO'}")
    print(f"  Rationale: {system.rationale}")
    if system.obligations:
        print(f"  Key obligations: {len(system.obligations)} items")
Output (representative):
Personio AI Candidate Matching
Classification: 4a (Recruitment/Selection)
CLOUD Act Exposure: LOW (EU sovereign entity)
BetrVG §87(1) Co-determination: NO (recruitment, not monitoring)
Rationale: screens or filters job applications
Key obligations: 10 items
Workday Recruiting AI (ATS Screening)
Classification: 4a (Recruitment/Selection)
CLOUD Act Exposure: HIGH (US entity)
BetrVG §87(1) Co-determination: NO (recruitment, not monitoring)
Rationale: screens or filters job applications, evaluates candidates during interviews or tests
Key obligations: 12 items (includes CLOUD Act GDPR transfer obligations)
LinkedIn Recruiter AI Candidate Ranking
Classification: 4a (Recruitment/Selection)
CLOUD Act Exposure: MEDIUM (EU entity, US parent)
BetrVG §87(1) Co-determination: NO
Rationale: screens or filters job applications, targets vacancy advertising to specific candidate profiles
Key obligations: 10 items
Workday Talent Optimization (Performance AI)
Classification: 4b (Performance Monitoring)
CLOUD Act Exposure: HIGH (US entity)
BetrVG §87(1) Co-determination: YES (Germany)
Rationale: monitors employee behaviour or performance, informs promotion or termination decisions
Key obligations: 13 items (includes BetrVG + CLOUD Act)
Productivity Monitoring AI (Probezeit tracking)
Classification: 4b (Performance Monitoring)
CLOUD Act Exposure: LOW
BetrVG §87(1) Co-determination: YES (Germany)
Rationale: monitors employee behaviour or performance, informs promotion or termination decisions
Key obligations: 11 items (includes BetrVG)
Uber Eats Order Assignment Algorithm
Classification: 4c (Self-Employment Access)
CLOUD Act Exposure: HIGH (US entity)
BetrVG §87(1) Co-determination: NO (self-employment, not employment)
Rationale: allocates tasks to self-employed platform workers, classifies or restricts gig worker platform access
Key obligations: 12 items (includes CLOUD Act)
Fiverr Freelancer Ranking Algorithm
Classification: 4c (Self-Employment Access)
CLOUD Act Exposure: HIGH (US entity)
BetrVG §87(1) Co-determination: NO
Rationale: allocates tasks to self-employed platform workers, classifies or restricts gig worker platform access
Key obligations: 12 items
Employee Engagement Survey Analytics
Classification: NOT HIGH-RISK
CLOUD Act Exposure: LOW
BetrVG §87(1) Co-determination: NO
Rationale: Advisory tool with no consequential decision authority.
Key obligations: 0 items
Key Obligations for High-Risk Employment AI
All systems classified under Annex III Point 4 trigger the full Chapter III obligation stack:
For providers (Art.16): Before placing the system on the market or putting it into service:
- Conduct conformity assessment (Art.43) — self-assessment for most employment AI, notified body review only where specifically required
- Establish risk management system (Art.9) including systematic bias testing across protected characteristics (gender, race, age, disability status) in both training data and model outputs
- Implement technical documentation (Art.11) and maintain it throughout the system's lifecycle
- Register the system in the EU AI Database (Art.49; the database itself is established under Art.71), mandatory before placing on the market or putting into service
- Affix CE marking and issue EU Declaration of Conformity (Art.48, Art.47)
For deployers (Art.26): Before putting a purchased system into use in HR processes:
- Conduct a fundamental rights impact assessment (Art.27) where the deployer is a body governed by public law or a private entity providing public services, applicable from the August 2026 general application date
- Implement human oversight arrangements that satisfy Art.14 — for employment decisions, this means a qualified HR professional must independently review consequential AI outputs before decisions are made
- Inform affected workers and their representatives, before putting the system into use at the workplace, that they will be subject to high-risk AI (Art.26(7))
- Maintain the logs generated by the AI system for HR decisions (Art.26(6))
- Where use of the system presents a serious risk, suspend use and inform the provider and the market surveillance authority (Art.26(5))
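A minimal sketch of what the deployer logging duty could look like in practice (the record fields are our own invention, not a format prescribed by the Act):

```python
import datetime
import json

def log_ai_hr_decision(log_path, system_name, subject_id, ai_score,
                       ai_recommendation, human_decision, human_reviewer):
    """Append one AI-influenced HR decision to a JSON-lines log file.

    A sketch of the kind of record a deployer logging duty contemplates;
    field names are hypothetical."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system_name,
        "subject_id": subject_id,
        "ai_score": ai_score,
        "ai_recommendation": ai_recommendation,
        "human_decision": human_decision,
        "human_reviewer": human_reviewer,
        # Override tracking doubles as evidence that human oversight is real:
        "human_overrode_ai": human_decision != ai_recommendation,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only, per-decision record like this supports both post-hoc accountability and the override-rate statistics discussed in the human oversight section.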
Art.14 Human Oversight — What Counts in HR Context: The Art.14 "human oversight" requirement for employment AI means more than a nominal approval step. A human who rubber-stamps every AI ranking without independent evaluation does not satisfy Art.14. The oversight must be meaningful: the HR professional must have the competence to understand the AI's limitations, the access to relevant information to evaluate the AI's output, and the authority to override the AI decision. For practical purposes, this means:
- CV screening AI: HR reviewer must review at least a sample of AI-rejected candidates independently
- Interview scoring AI: Human interviewers must be able to independently validate or override AI scores
- Performance monitoring AI: Manager must document the basis for promotion/termination decisions with or without the AI's input
- Gig platform deactivation AI: Human review required before account deactivation affecting livelihood access
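A deployer-side control consistent with these points might route a random sample of AI-rejected candidates to independent human review and track the override rate; a persistently zero override rate is a signal that oversight is nominal. A sketch (the 10% sample rate is an assumption, not an Art.14 requirement):

```python
import random

def sample_for_human_review(rejected_ids, sample_rate=0.10, seed=None):
    """Route a random sample of AI-rejected candidates to independent HR review."""
    ids = list(rejected_ids)
    if not ids:
        return []
    k = max(1, int(len(ids) * sample_rate))  # always review at least one
    return random.Random(seed).sample(ids, k)

def override_rate(reviewed: int, overridden: int) -> float:
    """Share of sampled AI rejections that the human reviewer reversed."""
    return overridden / reviewed if reviewed else 0.0

# 200 AI-rejected candidates, 10% sampled for independent review:
sampled = sample_for_human_review(range(200), seed=42)
assert len(sampled) == 20
```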
What Counts as NOT High-Risk in Employment Contexts
Not every AI tool in the HR workflow is high-risk under Annex III Point 4:
- Job description spell-checkers and grammar tools: Writing assistance with no candidate screening function
- Schedule optimisation tools that allocate shifts based purely on employee availability preferences (where no performance scoring is involved)
- Skills gap analysis tools that recommend training content without informing promotion decisions
- Employee engagement survey analytics that produce aggregate sentiment data for management awareness
- Benefits platform recommendation engines that suggest benefit options to employees
- Generic job board search on the candidate side (where candidates search for jobs and the AI helps them find relevant postings — no employer-side filtering)
- Onboarding chatbots that answer new employee questions
The common thread in the NOT high-risk category: the AI produces advisory output that neither determines access to employment opportunities nor directly influences continuation of employment, and its output is not treated by HR processes as a binding or de facto binding screening result.
The Fundamental Rights Impact Assessment Obligation
Art.27 of the EU AI Act requires certain deployers of high-risk AI systems listed in Annex III (bodies governed by public law, private entities providing public services, and deployers of the credit-scoring and insurance systems in Annex III points 5(b) and (c)) to conduct a Fundamental Rights Impact Assessment (FRIA) before putting the system into use. Private employers outside that scope are not caught by Art.27 directly, but a GDPR Art.35 data protection impact assessment will usually be required for the same systems, and a FRIA is the natural vehicle for it. For employment AI, this is a significant compliance undertaking that most HR teams have not yet addressed.
The FRIA for employment AI must assess:
Protected characteristics bias risk: Does the AI system produce disparate impact on candidates or employees by gender, race, ethnicity, nationality, religion, disability, age, or sexual orientation? Answering this requires empirical testing, not just legal analysis, using representative test datasets that include these protected characteristics.
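One commonly used screening heuristic for such testing is the "four-fifths rule": flag the system if the selection rate for any group falls below 80% of the highest group's rate. A minimal sketch with synthetic counts; the rule is a US-origin convention and the 0.8 threshold is not mandated by the AI Act:

```python
# Minimal disparate-impact check via the "four-fifths rule" heuristic.
# The 0.8 threshold is a convention, not an EU AI Act requirement;
# the group labels and counts below are synthetic test data.
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total); returns rate per group."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

synthetic = {
    "group_a": (48, 120),  # 40% selected
    "group_b": (30, 100),  # 30% selected
}
ratio = impact_ratio(synthetic)  # 0.30 / 0.40 = 0.75
flagged = ratio < 0.8            # below the four-fifths threshold
```

A real FRIA test run would compute this per pipeline stage and per protected characteristic, on held-out data large enough for the per-group rates to be statistically meaningful.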
Power imbalance: Employment decisions are structurally asymmetric — the candidate or employee has materially less power than the employer. The FRIA must document how the AI system's design mitigates rather than amplifies this asymmetry.
Contestability: Can a candidate who was screened out by ATS AI effectively contest the decision? Art.86 gives affected persons a right to an explanation of the role of the high-risk AI system in individual decisions, which presupposes a meaningful recourse path. For German employers, this intersects with BetrVG §83 (personnel file access) and §§84-85 (complaint procedures).
Cumulative effects: Where multiple AI systems are used in sequence — ATS screening feeds into interview scheduling AI which feeds into reference check automation — the FRIA must address the cumulative effect of the chain, not just each system in isolation.
25-Item Compliance Checklist: Employment AI Under Annex III Point 4
Classification and Scoping
1. [ ] Identify all AI features in your HR tech stack that screen, filter, rank, or score candidates (ATS, job targeting, interview AI)
2. [ ] Identify all AI features that monitor employee performance, inform promotion decisions, or allocate tasks
3. [ ] Identify all AI features in gig/platform contexts that determine task allocation or account status for self-employed workers
4. [ ] Confirm that purely advisory and productivity tools (training recommenders, benefit chatbots) are documented as NOT high-risk, with rationale
5. [ ] Map each high-risk system to its provider entity and confirm whether the provider is EU or US headquartered (CLOUD Act exposure)
Provider Obligations (if you build the system)
6. [ ] Complete conformity assessment (Art.43) for each high-risk employment AI system before market placement
7. [ ] Establish Art.9 risk management system with documented bias testing methodology across protected characteristics
8. [ ] Conduct Art.10 training data audit — representativeness review for gender, race, age, disability status balance
9. [ ] Prepare Art.11 technical documentation including description of system logic, performance metrics, and known limitations
10. [ ] Register system in the EU database (Art.49; database established under Art.71) — mandatory before market placement or putting into service
11. [ ] Affix CE marking and issue EU Declaration of Conformity (Art.47, Art.48)
12. [ ] Establish post-market monitoring system (Art.72) to detect bias emergence and accuracy drift in production
Deployer Obligations (if you buy and deploy the system)
13. [ ] Obtain technical documentation and AI system card from provider — verify it covers the specific use case
14. [ ] Conduct a Fundamental Rights Impact Assessment (Art.27, where in scope; a GDPR Art.35 DPIA otherwise) before deployment — document bias risk, contestability, cumulative effects
15. [ ] Document Art.14 human oversight arrangements — specify which HR role reviews AI outputs, with what competence, and with what authority to override
16. [ ] Implement notification — inform workers' representatives and workers (Art.26(7)) and affected natural persons such as candidates (Art.26(11)) before processing
17. [ ] Establish Art.26(6) logging — retain automatically generated logs for at least six months for post-hoc accountability and audit
18. [ ] Define internal escalation procedure for AI outputs that appear erroneous or biased
German-Specific (BetrVG §87(1) Nr.6)
19. [ ] Identify whether performance monitoring or task allocation AI triggers the Betriebsrat co-determination obligation
20. [ ] Initiate Betriebsvereinbarung negotiation before vendor contract signature (not after deployment)
21. [ ] Ensure the Betriebsvereinbarung covers: data processed, decision authority of AI vs human, employee notification, audit rights for the Betriebsrat
CLOUD Act and Data Transfer
22. [ ] For US-headquartered HR platforms (Workday, Oracle HCM, BambooHR), document CLOUD Act residual risk in the FRIA
23. [ ] Ensure Standard Contractual Clauses and supplementary measures are in place for US data transfers (GDPR Art.46)
24. [ ] Consider EU-sovereign alternatives for the highest-risk employment data (hiring decisions, performance reviews, termination documentation)
Ongoing Monitoring
25. [ ] Schedule annual bias audit for each deployed employment AI system — track outcomes across protected characteristics and document results
Annex III High-Risk Categories Series
This post is part of a systematic series covering all eight Annex III high-risk category points of the EU AI Act:
| Point | Category | Status |
|---|---|---|
| Point 1 | Biometric Identification, Categorisation, and Emotion Recognition | Published → |
| Point 2 | Critical Infrastructure AI (Water, Energy, Transport, NIS2 Dual Compliance) | Published → |
| Point 3 | Education and Vocational Training AI (University Admission, Exam Proctoring) | Published → |
| Point 4 | Employment, Workers Management and Access to Self-Employment | This article |
| Point 5 | Essential Services (Credit Scoring, Insurance, Public Benefits) | Coming soon |
| Point 6 | Law Enforcement (Risk Assessment, Criminal Profiling, Evidence AI) | Coming soon |
| Point 7 | Migration and Border Management (Irregular Migration Risk AI) | Coming soon |
| Point 8 | Administration of Justice (Judicial AI, Dispute Resolution) | Coming soon |
See Also
- EU AI Act Art.14: Human Oversight Requirements for High-Risk AI Systems — The human oversight obligations that apply to all Annex III Point 4 employment AI decisions
- EU AI Act Art.5: Prohibited AI Practices — The absolute prohibition on social scoring that can interact with employee performance monitoring AI
- EU AI Act Annex III Point 1: Biometric AI — Workplace biometric access AI that runs parallel to employment AI obligations
- EU AI Act Annex III Point 3: Education AI — The same consequential decision pattern in educational contexts
- EU AI Act Annex III Point 5: Essential Services AI — Credit Scoring, Insurance, Public Benefits — Consumer-consequential AI in financial and social services