If you build or deploy AI systems that score consumer creditworthiness, evaluate risk for life or health insurance policies, or determine eligibility for public benefits and healthcare, EU AI Act Annex III Point 5 classifies your system as high-risk, triggering the full Chapter III obligations before the 2 August 2026 general application date. The compliance gap in this domain is particularly acute because many European financial institutions already operate under a parallel legal framework: in December 2023 the European Court of Justice ruled in Case C-634/21 that scoring by Schufa, Germany's largest consumer credit bureau, triggers the GDPR Art.22 protections on automated decision-making, and the same system now also qualifies as high-risk under Annex III Point 5(b). That double compliance mandate is one neither Schufa nor most of the German banks relying on its scores have publicly addressed. A second, structural problem: several dominant credit scoring providers in Europe carry CLOUD Act exposure on EU consumer income data, bank account records, and credit histories (TransUnion and FICO are US-headquartered; Experian, though Irish-incorporated, has extensive US operations). This sits in direct tension with GDPR Art.48, under which third-country disclosure orders are enforceable only on the basis of an international agreement such as a mutual legal assistance treaty, a provision no EU financial regulator has yet enforced at scale.
What Annex III Point 5 Actually Covers
Annex III Point 5 of the EU AI Act applies to three distinct categories of AI systems in access to essential private and public services:
(a) Public benefits and healthcare eligibility evaluation: AI systems intended to be used by or on behalf of competent public authorities, or Union institutions, bodies, offices, and agencies, to evaluate the eligibility of natural persons for essential public benefits and services — including healthcare services — and to grant, reduce, revoke, or reclaim such benefits and services. This covers welfare entitlement determination AI, healthcare access scoring systems, social housing allocation algorithms, disability benefits assessment tools, and any AI system used by public authorities to make or materially influence decisions about whether an individual receives essential public services.
(b) Creditworthiness and credit scoring: AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with a specific exception for AI systems put into service for the purpose of detecting financial fraud. This is the most commercially significant category — it covers credit bureau scoring models, bank internal credit assessment AI, fintech lending algorithms, buy-now-pay-later creditworthiness engines, and any AI system whose output informs lending decisions about natural persons.
(c) Life and health insurance risk assessment and pricing: AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance. This covers actuarial AI models that price life insurance premiums, health insurance underwriting AI, disability insurance risk scoring, and any AI system that determines what an individual pays for or whether they can access life or health insurance products.
The scope is deliberately consumer-protective. The common thread is consequential access to resources that natural persons depend on for housing, financial participation, health, and survival. An adverse decision from any of these systems — rejected loan, reduced benefits, unaffordable insurance premium — can have life-altering consequences with limited recourse, which is precisely why the EU AI Act classifies them as high-risk.
Point 5(b) in Detail: The Credit Scoring Compliance Gap
Schufa + ECJ C-634/21 + EU AI Act: A Double Compliance Mandate
The most significant compliance development for European credit scoring AI came before the EU AI Act even entered into force. In December 2023, the European Court of Justice ruled in Case C-634/21 (OQ v Land Hessen) that Schufa's establishment of a credit score itself constitutes automated individual decision-making under GDPR Art.22 where lenders draw strongly on that score, meaning Schufa and the creditors relying on its scores must provide meaningful information about the scoring logic and cannot make adverse decisions without the possibility of human intervention.
Now the same Schufa scoring system also triggers EU AI Act Annex III Point 5(b) high-risk classification. The compliance implications are additive, not overlapping:
| Legal Framework | Obligation for Credit Scoring AI | Schufa/Bank Status |
|---|---|---|
| GDPR Art.22 | Meaningful information about the logic involved; right to human intervention and to contest | ECJ confirmed the obligation applies; no public compliance statement |
| GDPR Art.22(3) | Automated decision prohibited unless contract necessity, law, or explicit consent | Banks relying on Schufa need legal basis for automated adverse decisions |
| EU AI Act Art.9 | Risk management system covering credit scoring bias, accuracy, data quality | No public conformity assessment published |
| EU AI Act Art.10 | Training data governance: bias testing across protected characteristics | No public disclosure |
| EU AI Act Art.13 | Transparency to borrowers: AI system used, how to contest | No standard notification |
| EU AI Act Art.14 | Human oversight: meaningful human review before adverse lending decision | Unclear implementation across German banking sector |
| EU AI Act Art.26/27 | Deployer obligations (oversight, logs, notification) and fundamental rights impact assessment before using Schufa AI for credit decisions | No German bank has published this |
The compliance gap is acute: approximately 9,500 contract partners use Schufa data, including most German banks, telecommunications companies, and utilities, and none have publicly completed the EU AI Act Annex III Point 5(b) deployer obligations. The ECJ ruling makes clear that GDPR Art.22 applies. The EU AI Act makes clear that Annex III Point 5(b) also applies. Neither framework has been publicly satisfied.
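The additive nature of the two frameworks can be sketched as a simple gap report for a single credit-scoring deployment. The obligation summaries and function below are illustrative paraphrases, not legal text or an official checklist:

```python
# Illustrative sketch: additive GDPR Art.22 + EU AI Act obligations for one
# credit-scoring deployment. Obligation wording is a paraphrase, not legal text.
GDPR_ART22_OBLIGATIONS = [
    "legal basis for solely automated adverse decisions (Art.22(2))",
    "meaningful information about the logic involved (Art.15(1)(h))",
    "human intervention and right to contest (Art.22(3))",
]
AI_ACT_DEPLOYER_OBLIGATIONS = [
    "use per provider instructions with human oversight (Art.26(1)-(2))",
    "log retention for at least six months (Art.26(6))",
    "inform affected persons (Art.26(11))",
    "fundamental rights impact assessment (Art.27, Annex III 5(b))",
]

def compliance_gap(satisfied: set[str]) -> list[str]:
    """Return every obligation from BOTH frameworks not yet evidenced."""
    return [o for o in GDPR_ART22_OBLIGATIONS + AI_ACT_DEPLOYER_OBLIGATIONS
            if o not in satisfied]

# A deployer that has handled only GDPR still carries the full AI Act list:
gaps = compliance_gap(set(GDPR_ART22_OBLIGATIONS))
```

The point the sketch makes is structural: satisfying one framework removes nothing from the other's list.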
CLOUD Act Exposure in EU Credit Scoring
The structural sovereignty problem in European credit scoring is that three of the largest credit data providers operating in Europe are US entities with full CLOUD Act exposure:
| Provider | Headquarters | EU Operations | CLOUD Act Exposure |
|---|---|---|---|
| Experian plc | Dublin, Ireland (listed on LSE); extensive US operations via Experian North America | Experian Credit Reference Agency, 21 EU member states | PARTIAL — Irish-incorporated parent, but data held by its US entities is within CLOUD Act reach |
| TransUnion | Chicago, Illinois | Trans Union International in UK/EU | YES — full CLOUD Act exposure on EU consumer files |
| FICO | United States (Fair Isaac Corporation) | FICO score used by European lenders | YES — US entity providing scoring models |
| Schufa | Wiesbaden, Germany | Germany-only, German law governed | NO — German legal entity, no US parent, CLOUD Act does not apply |
| Creditreform | Neuss, Germany | DACH region | NO — German legal entity |
| CRIF | Bologna, Italy | Pan-European | NO — Italian legal entity |
For European fintech lenders and banks that use Experian, TransUnion, or FICO-derived models for EU consumer credit scoring: EU consumer income data, employment records, payment history, and bank account information flowing through these systems is accessible to US law enforcement via CLOUD Act orders. GDPR Art.48 provides that such orders are enforceable only if based on an international agreement, such as a mutual legal assistance treaty; this provision has not, however, prevented CLOUD Act requests from being served. The EU AI Act Annex III Point 5(b) compliance assessment for deployers must now address this data sovereignty exposure as part of Art.9 risk management.
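A deployer-side vendor screen can capture the key point that storage location does not defeat CLOUD Act reach. This is a simplified sketch; the jurisdiction flag and vendor entries are illustrative:

```python
# Sketch of a vendor sovereignty screen for credit-data providers. Simplified:
# CLOUD Act reach turns on US jurisdiction over the provider (or a controlling
# US entity), not on where the data happens to be stored.
from dataclasses import dataclass

@dataclass
class CreditDataVendor:
    name: str
    us_jurisdiction: bool    # US entity, or US parent/controlling entity
    eu_data_localised: bool  # storage location (does NOT defeat the CLOUD Act)

def cloud_act_exposed(v: CreditDataVendor) -> bool:
    # The CLOUD Act reaches data in a provider's "possession, custody, or
    # control" wherever it is stored, so localisation alone changes nothing.
    return v.us_jurisdiction

vendors = [
    CreditDataVendor("TransUnion", us_jurisdiction=True, eu_data_localised=True),
    CreditDataVendor("Schufa", us_jurisdiction=False, eu_data_localised=True),
]
exposed = [v.name for v in vendors if cloud_act_exposed(v)]
```

Note that `eu_data_localised` deliberately plays no role in the predicate: that is the Art.48 conflict in one line.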
BaFin MaRisk + EU AI Act: Double Compliance for German Banks
German banks operating under BaFin supervision face a regulatory stack for credit scoring AI that now includes two distinct frameworks:
BaFin MaRisk (BA 7.2 — Model Risk Management): BaFin's Minimum Requirements for Risk Management includes specific guidance on model validation, model governance, and ongoing monitoring for credit risk models. Banks must document model assumptions, test model performance, and maintain model inventories — requirements that apply to credit scoring AI.
EU AI Act Annex III Point 5(b): Full Chapter III high-risk obligations including Art.9 risk management, Art.10 training data governance, Art.11 technical documentation, Art.13 transparency, Art.14 human oversight, Art.15 accuracy/robustness, Art.26 deployer obligations, and registration in the EU AI Act database (Art.49; database established under Art.71).
The two frameworks are not aligned. BaFin MaRisk focuses on financial model risk (impact on bank solvency). EU AI Act focuses on fundamental rights risk (impact on individual borrowers). A bank that satisfies BaFin MaRisk model validation requirements has NOT automatically satisfied EU AI Act Art.9 risk management — the frameworks use different risk taxonomies, different documentation standards, and different governance structures. No German bank has published a public compliance statement showing how it satisfies both frameworks for the same credit scoring AI system.
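The non-alignment can be made concrete as a two-way mapping. The entries below are an illustrative, non-exhaustive sketch, not an official crosswalk between MaRisk and the AI Act:

```python
# Illustrative (non-exhaustive) mapping between BaFin MaRisk model-risk themes
# and EU AI Act articles. None marks a theme one framework covers that the
# other does not: satisfying one framework leaves the other's gaps open.
MARISK_TO_AI_ACT = {
    "model validation / performance testing": "Art.15 (accuracy, robustness)",
    "model inventory and documentation": "Art.11 (technical documentation)",
    "ongoing monitoring": "Art.72 (post-market monitoring)",
    "solvency / capital impact": None,          # financial-risk concern only
}
AI_ACT_TO_MARISK = {
    "Art.10 training-data bias testing": None,  # fundamental-rights concern only
    "Art.13 transparency to borrowers": None,
    "Art.14 human oversight of individual decisions": None,
}

marisk_only = [k for k, v in MARISK_TO_AI_ACT.items() if v is None]
ai_act_only = [k for k, v in AI_ACT_TO_MARISK.items() if v is None]
```

The two `*_only` lists are the compliance gap: a bank whose evidence covers only one column of either dict has, by construction, open obligations under the other framework.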
PSD2 Open Banking Credit Scoring: The Fintech Compliance Gap
EU Payment Services Directive 2 (PSD2) created a legal framework for open banking data sharing — account information services (AIS) provide real-time access to bank transaction data with consumer consent. Fintech lenders use this data for real-time creditworthiness assessment:
| Fintech | Credit Scoring Approach | Annex III Point 5(b) Status |
|---|---|---|
| Klarna | Bank transaction analysis + purchase history for BNPL creditworthiness | HIGH-RISK |
| N26 Credit | Open banking + internal transaction data for instant credit scoring | HIGH-RISK |
| Auxmoney | P2P lending with AI borrower risk scoring | HIGH-RISK |
| Scalable Credit | Investment account + income analysis credit scoring | HIGH-RISK |
| Smava | Multi-lender matching with borrower creditworthiness AI | HIGH-RISK |
| Solaris | Banking-as-a-Service credit scoring for embedded finance | HIGH-RISK |
| Rule-based DTI calculator (no ML) | Fixed debt-to-income ratio threshold | NOT HIGH-RISK |
| Static credit score lookup from CRA | Passing through an existing bureau score unchanged | Context-dependent (likely high-risk as deployer) |
The BNPL (buy now, pay later) sector is particularly exposed. Klarna's AI credit scoring for BNPL transactions evaluates the creditworthiness of natural persons, which places it squarely within Annex III Point 5(b) scope. Klarna has not published an EU AI Act conformity assessment. The 2 August 2026 deadline applies to Klarna and every other BNPL provider operating in the EU, regardless of where they are incorporated.
The fraud detection exception is narrow: AI systems used for the purpose of detecting financial fraud are excluded from the high-risk classification. But the exception does not cover dual-purpose systems that combine fraud detection with creditworthiness scoring. What matters is the system's intended purpose: if the system is intended to decide whether to extend credit, with fraud detection as a secondary function, the exception does not apply.
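The intended-purpose carve-out can be sketched as a predicate. The function name and inputs are illustrative, and a real assessment would look at the provider's declared intended purpose and the system's actual role in the lending workflow:

```python
# Sketch of the intended-purpose test for the Annex III 5(b) fraud-detection
# carve-out. "intended_purpose" mirrors the AI Act's reliance on the purpose
# the provider declares, not on incidental capabilities of the system.
def fraud_exception_applies(intended_purpose: str,
                            informs_credit_decision: bool) -> bool:
    is_fraud = "fraud" in intended_purpose.lower()
    # A dual-use system whose output feeds credit decisions loses the carve-out.
    return is_fraud and not informs_credit_decision
```

Usage: a transaction fraud scorer whose output never touches the lending decision keeps the exception; a "fraud-aware" creditworthiness engine does not.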
Point 5(c): Insurance AI and the Gender-Proxy Problem
ECJ Test-Achats + EU AI Act Insurance Underwriting
Insurance AI underwriting in the EU operates under a constraint that predates the EU AI Act: the ECJ's 2011 ruling in Case C-236/09 (Test-Achats) prohibited gender-based pricing in insurance contracts under EU law. Insurers responded by removing explicit gender variables from their models — but AI underwriting systems routinely use correlated proxies that effectively re-introduce gender discrimination through indirect variables:
| AI Underwriting Input | Proxy For | Legal Risk |
|---|---|---|
| Vehicle type (sports car vs. family car) | Gender-skewed ownership and usage patterns | Indirect gender discrimination (ECJ Test-Achats) |
| Urban vs. rural postcode | Correlated with gender in some risk segments | Indirect discrimination if used as gender proxy |
| Occupation (construction vs. nursing) | Strong gender correlation | Indirect discrimination |
| Annual mileage self-reported | Correlated with commuting patterns by gender | Indirect discrimination if model over-weights |
| Driving time of day | Gender-correlated patterns | Indirect discrimination |
Under EU AI Act Art.9, high-risk AI systems must include risk management measures addressing all foreseeable risks — including indirect discrimination. For insurance AI classified under Annex III Point 5(c), this means underwriters must test their models for gender-proxy effects and document that no variable serves as an illegal gender proxy. This is a technical requirement that most insurance AI systems have not explicitly addressed.
The practical compliance implication: insurers such as Allianz, Munich Re, and AXA must now conduct bias testing specifically for indirect gender discrimination as part of their EU AI Act Annex III Point 5(c) conformity assessments. US-headquartered insurtech providers operating in EU markets, such as Lemonade, must satisfy the same requirements from CLOUD Act-exposed infrastructure.
Life Insurance AI and Genetic Data Prohibition
Life insurance underwriting AI faces an additional constraint: GDPR Art.9 classifies genetic data as a special category of personal data, and national laws in most member states (Germany's Gendiagnostikgesetz, for example) prohibit or tightly restrict its use in insurance underwriting. Insurance AI that uses health proxy data (family medical history collected via questionnaire, genomic data from ancestry services, predicted health outcomes from lifestyle data) risks falling into prohibited territory.
For life insurance AI systems classified under Annex III Point 5(c): Art.10 of the EU AI Act requires specific governance measures for special-category data in training datasets. If a life insurance AI model was trained on data that included genetic proxies or health condition variables, the conformity assessment must document how the Art.10(5) requirements are met.
Point 5(a) in Practice: Public Benefits AI and the SyRI Warning
Netherlands SyRI: The Case That Previewed EU AI Act Prohibitions
The Netherlands' SyRI (Systeem Risico Indicatie) programme, a government welfare fraud detection AI that integrated data from tax, benefits, employment, and social services agencies to generate individual risk scores, was struck down by the Hague District Court in February 2020 as incompatible with Article 8 of the European Convention on Human Rights (the right to respect for private life). The court found that SyRI's processing of personal data without transparency, meaningful access to scores, or effective human oversight was a disproportionate interference with fundamental rights.
Under the current EU AI Act framework, SyRI would at minimum be high-risk under Annex III Point 5(a) (public benefits evaluation AI) and could potentially cross into prohibited social scoring under Art.5(1)(c). The SyRI judgment established key principles that now inform Point 5(a) compliance:
- Transparency obligation: Individuals must be able to understand what data was used to generate a risk score and what the score means for their benefits access
- Human oversight requirement: Risk scores cannot automatically reduce or revoke benefits without meaningful human review
- Accuracy and bias testing: Systems used against economically vulnerable populations require heightened accuracy standards
- Necessity and proportionality: Data integration across government silos requires explicit legal basis proportionate to the fraud prevention objective
These principles map directly onto EU AI Act Art.9 (risk management), Art.13 (transparency), Art.14 (human oversight), and Art.15 (accuracy/robustness) requirements for Annex III Point 5(a) systems.
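That mapping can be expressed as a compliance self-test for a Point 5(a) system. The field names are illustrative, and each check is a yes/no stand-in for what would in practice be documented evidence:

```python
# Sketch mapping the SyRI judgment's principles onto EU AI Act article checks
# for a public-benefits scoring system. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class BenefitsAISystem:
    score_explainable_to_claimant: bool     # SyRI transparency  -> Art.13
    human_review_before_revocation: bool    # SyRI oversight     -> Art.14
    bias_tested_on_vulnerable_groups: bool  # SyRI accuracy      -> Art.9/15
    legal_basis_for_data_linkage: bool      # SyRI proportionality -> Art.9

def syri_test_failures(s: BenefitsAISystem) -> list[str]:
    checks = {
        "Art.13 transparency": s.score_explainable_to_claimant,
        "Art.14 human oversight": s.human_review_before_revocation,
        "Art.9/15 accuracy and bias": s.bias_tested_on_vulnerable_groups,
        "Art.9 necessity/proportionality": s.legal_basis_for_data_linkage,
    }
    return [name for name, ok in checks.items() if not ok]

# A SyRI-like system (opaque scores, automatic effects) fails on all counts:
failures = syri_test_failures(BenefitsAISystem(False, False, False, False))
```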
UK Universal Credit Algorithm: An EU AI Act Case Study
The UK Department for Work and Pensions operates an algorithmic management system for Universal Credit (UC) — a benefit consolidation programme covering housing, income support, and employment assistance. The UC algorithm:
- Applies automated conditionality checks that can trigger benefit sanctions without individual caseworker review
- Uses algorithmic triage to prioritise which claimants receive intensive support vs. minimal intervention
- Applies automated fraud risk scoring to claims
Had the UK remained in the EU, the Universal Credit algorithm would almost certainly qualify as high-risk under EU AI Act Annex III Point 5(a). The automated benefit conditionality system that can suspend payments without a caseworker decision is precisely the type of public authority AI that Annex III Point 5(a) targets.
For EU member states with analogous systems — Germany's Jobcenter algorithms for Hartz IV/Bürgergeld, the French CAF benefit entitlement assessment AI, the Austrian AMS employment scoring system — Annex III Point 5(a) compliance requires immediate assessment. The Austrian AMS system (which classified unemployed persons into low/medium/high labour-market-integration probability groups) was ruled GDPR-incompatible by the Austrian data protection authority in 2020, a decision subsequently litigated in the Austrian courts; in its original form it would also fail the Annex III Point 5(a) transparency and oversight obligations.
The Provider vs. Deployer Split in Essential Services AI
The EU AI Act Annex III Point 5 compliance structure creates distinct obligations for AI providers (those who build the system) and deployers (those who put it into use in specific contexts):
Provider obligations (Art.16):
- Technical documentation demonstrating system accuracy and bias mitigation (Art.11)
- Training data governance including protected-characteristic bias testing (Art.10)
- EU AI Act database registration before market placement (Art.71)
- Conformity assessment (self-assessment for most Annex III systems under Art.43(2))
- Post-market monitoring system (Art.72)
Deployer obligations (Art.26):
- Human oversight measures appropriate to the specific deployment context (Art.26(2))
- Fundamental rights impact assessment covering the specific deployment context and affected populations (Art.27, mandatory for Annex III Point 5(b) and 5(c) deployers)
- Notification to persons that they are subject to the AI system's decision-making (Art.26(11))
- Maintaining logs of system operation for post-incident review (Art.26(6))
A German bank deploying Experian's credit scoring AI has both provider and deployer considerations: if the bank substantially modifies the scoring model for its specific customer population, it may become a provider in its own right under Art.25(1), regardless of Experian's original provider role. If the bank uses Experian's model as-is, the bank is the deployer with full Art.26 obligations.
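The role split can be sketched as follows, with the substantial-modification trigger simplified from Art.25(1); a real role determination is fact-specific:

```python
# Sketch of the provider/deployer role determination. Simplified: Art.25(1)
# lists several triggers (name/trademark on the system, substantial
# modification, change of intended purpose); two are modelled here.
def operator_roles(customises_model: bool, rebrands_system: bool) -> set[str]:
    roles = {"deployer"}  # the bank uses the system in its own operations
    if customises_model or rebrands_system:
        # Substantial modification or white-labelling can make the bank a
        # provider in its own right, on top of its deployer duties.
        roles.add("provider")
    return roles

# Off-the-shelf use leaves only Art.26 deployer duties; customisation adds
# the full provider obligation set:
roles = operator_roles(customises_model=True, rebrands_system=False)
```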
Worked Example: A Python Annex III Point 5 Classifier
The classifier below is an illustrative sketch of the decision logic described above; the category boundaries and risk labels are simplifications, not legal advice.

```python
from dataclasses import dataclass
from enum import Enum


class AnnexIIIPoint5Category(Enum):
    PUBLIC_BENEFITS_HEALTHCARE = "5a"
    CREDITWORTHINESS_SCORING = "5b"
    INSURANCE_RISK_PRICING = "5c"
    NOT_COVERED = "not_covered"


class CloudActRisk(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"
    NONE = "none"


@dataclass
class EssentialServicesAIClassification:
    system_name: str
    category: AnnexIIIPoint5Category
    is_high_risk: bool
    cloud_act_risk: CloudActRisk
    gdpr_art22_applies: bool
    fraud_detection_exception: bool
    rationale: str
    bafin_marisk_overlap: bool = False


def classify_essential_services_ai(
    system_name: str,
    purpose: str,  # free-text description; documentation only, not used in the logic
    operated_by_public_authority: bool,
    affects_public_benefits: bool,
    affects_creditworthiness: bool,
    affects_insurance_pricing: bool,
    primarily_fraud_detection: bool,
    provider_us_entity: bool,
    is_automated_adverse_decision: bool,
    applies_to_german_banks: bool = False,
) -> EssentialServicesAIClassification:
    # Annex III 5(b) carve-out: only credit-related systems whose intended
    # purpose is fraud detection escape the high-risk classification.
    fraud_exception = primarily_fraud_detection and affects_creditworthiness

    # Point 5(a): public benefits / healthcare eligibility evaluation.
    if affects_public_benefits and operated_by_public_authority:
        return EssentialServicesAIClassification(
            system_name=system_name,
            category=AnnexIIIPoint5Category.PUBLIC_BENEFITS_HEALTHCARE,
            is_high_risk=True,
            cloud_act_risk=CloudActRisk.HIGH if provider_us_entity else CloudActRisk.NONE,
            gdpr_art22_applies=is_automated_adverse_decision,
            fraud_detection_exception=False,
            rationale=f"{system_name}: Public authority benefits/healthcare evaluation AI — Annex III Point 5(a). Art.9/13/14/26 obligations apply.",
        )

    # Point 5(b): creditworthiness evaluation / credit scoring.
    if affects_creditworthiness:
        if fraud_exception:
            return EssentialServicesAIClassification(
                system_name=system_name,
                category=AnnexIIIPoint5Category.CREDITWORTHINESS_SCORING,
                is_high_risk=False,
                cloud_act_risk=CloudActRisk.HIGH if provider_us_entity else CloudActRisk.NONE,
                gdpr_art22_applies=False,
                fraud_detection_exception=True,
                rationale=f"{system_name}: Primary purpose is fraud detection — Annex III Point 5(b) exception applies. Not classified as high-risk.",
            )
        suffix = " BaFin MaRisk parallel framework." if applies_to_german_banks else ""
        return EssentialServicesAIClassification(
            system_name=system_name,
            category=AnnexIIIPoint5Category.CREDITWORTHINESS_SCORING,
            is_high_risk=True,
            cloud_act_risk=CloudActRisk.HIGH if provider_us_entity else CloudActRisk.LOW,
            gdpr_art22_applies=is_automated_adverse_decision,
            fraud_detection_exception=False,
            rationale=f"{system_name}: Creditworthiness evaluation of natural persons — Annex III Point 5(b). Full Chapter III obligations.{suffix}",
            bafin_marisk_overlap=applies_to_german_banks,
        )

    # Point 5(c): life and health insurance risk assessment and pricing.
    if affects_insurance_pricing:
        return EssentialServicesAIClassification(
            system_name=system_name,
            category=AnnexIIIPoint5Category.INSURANCE_RISK_PRICING,
            is_high_risk=True,
            cloud_act_risk=CloudActRisk.HIGH if provider_us_entity else CloudActRisk.LOW,
            gdpr_art22_applies=is_automated_adverse_decision,
            fraud_detection_exception=False,
            rationale=f"{system_name}: Life/health insurance risk assessment/pricing — Annex III Point 5(c). Test-Achats gender-proxy bias testing required.",
        )

    return EssentialServicesAIClassification(
        system_name=system_name,
        category=AnnexIIIPoint5Category.NOT_COVERED,
        is_high_risk=False,
        cloud_act_risk=CloudActRisk.NONE,
        gdpr_art22_applies=False,
        fraud_detection_exception=False,
        rationale=f"{system_name}: Does not fall within Annex III Point 5 categories.",
    )


systems_to_classify = [
    {"name": "Schufa Bonitätsscore", "purpose": "Consumer creditworthiness scoring for German lenders",
     "public_auth": False, "benefits": False, "credit": True, "insurance": False,
     "fraud": False, "us_entity": False, "automated_adverse": True, "german_banks": True},
    {"name": "Experian CreditExpert Score (EU)", "purpose": "Consumer credit score for European lenders",
     "public_auth": False, "benefits": False, "credit": True, "insurance": False,
     "fraud": False, "us_entity": True, "automated_adverse": True, "german_banks": True},
    {"name": "Klarna BNPL Creditworthiness AI", "purpose": "Buy-now-pay-later instant credit decision",
     "public_auth": False, "benefits": False, "credit": True, "insurance": False,
     "fraud": False, "us_entity": False, "automated_adverse": True, "german_banks": False},
    {"name": "FICO Fraud Detection Score", "purpose": "Real-time financial transaction fraud detection",
     "public_auth": False, "benefits": False, "credit": True, "insurance": False,
     "fraud": True, "us_entity": True, "automated_adverse": False, "german_banks": False},
    {"name": "German Jobcenter Vermittlungsquoten AI", "purpose": "Algorithmic triage of unemployed persons for Bürgergeld support intensity",
     "public_auth": True, "benefits": True, "credit": False, "insurance": False,
     "fraud": False, "us_entity": False, "automated_adverse": True, "german_banks": False},
    {"name": "Allianz Life Insurance Underwriting AI", "purpose": "Life insurance premium pricing and risk scoring",
     "public_auth": False, "benefits": False, "credit": False, "insurance": True,
     "fraud": False, "us_entity": False, "automated_adverse": True, "german_banks": False},
    {"name": "Lemonade Insurance AI (EU operations)", "purpose": "AI-native property and life insurance pricing",
     "public_auth": False, "benefits": False, "credit": False, "insurance": True,
     "fraud": False, "us_entity": True, "automated_adverse": True, "german_banks": False},
    {"name": "Generic Mortgage Calculator (no ML)", "purpose": "Fixed-formula DTI and LTV calculation",
     "public_auth": False, "benefits": False, "credit": False, "insurance": False,
     "fraud": False, "us_entity": False, "automated_adverse": False, "german_banks": False},
]

for s in systems_to_classify:
    result = classify_essential_services_ai(
        system_name=s["name"],
        purpose=s["purpose"],
        operated_by_public_authority=s["public_auth"],
        affects_public_benefits=s["benefits"],
        affects_creditworthiness=s["credit"],
        affects_insurance_pricing=s["insurance"],
        primarily_fraud_detection=s["fraud"],
        provider_us_entity=s["us_entity"],
        is_automated_adverse_decision=s["automated_adverse"],
        applies_to_german_banks=s["german_banks"],
    )
    status = ("HIGH-RISK" if result.is_high_risk
              else "EXCLUDED (fraud exception)" if result.fraud_detection_exception
              else "NOT HIGH-RISK")
    print(f"{result.system_name}: {status} | Category: {result.category.value} | CLOUD Act: {result.cloud_act_risk.value}")
    print(f"  → {result.rationale}")
```
Classifier output:

```text
Schufa Bonitätsscore: HIGH-RISK | Category: 5b | CLOUD Act: low
  → Schufa Bonitätsscore: Creditworthiness evaluation of natural persons — Annex III Point 5(b). Full Chapter III obligations. BaFin MaRisk parallel framework.
Experian CreditExpert Score (EU): HIGH-RISK | Category: 5b | CLOUD Act: high
  → Experian CreditExpert Score (EU): Creditworthiness evaluation of natural persons — Annex III Point 5(b). Full Chapter III obligations. BaFin MaRisk parallel framework.
Klarna BNPL Creditworthiness AI: HIGH-RISK | Category: 5b | CLOUD Act: low
  → Klarna BNPL Creditworthiness AI: Creditworthiness evaluation of natural persons — Annex III Point 5(b). Full Chapter III obligations.
FICO Fraud Detection Score: EXCLUDED (fraud exception) | Category: 5b | CLOUD Act: high
  → FICO Fraud Detection Score: Primary purpose is fraud detection — Annex III Point 5(b) exception applies. Not classified as high-risk.
German Jobcenter Vermittlungsquoten AI: HIGH-RISK | Category: 5a | CLOUD Act: none
  → German Jobcenter Vermittlungsquoten AI: Public authority benefits/healthcare evaluation AI — Annex III Point 5(a). Art.9/13/14/26 obligations apply.
Allianz Life Insurance Underwriting AI: HIGH-RISK | Category: 5c | CLOUD Act: low
  → Allianz Life Insurance Underwriting AI: Life/health insurance risk assessment/pricing — Annex III Point 5(c). Test-Achats gender-proxy bias testing required.
Lemonade Insurance AI (EU operations): HIGH-RISK | Category: 5c | CLOUD Act: high
  → Lemonade Insurance AI (EU operations): Life/health insurance risk assessment/pricing — Annex III Point 5(c). Test-Achats gender-proxy bias testing required.
Generic Mortgage Calculator (no ML): NOT HIGH-RISK | Category: not_covered | CLOUD Act: none
  → Generic Mortgage Calculator (no ML): Does not fall within Annex III Point 5 categories.
```
Practical Compliance Timeline for Annex III Point 5 Systems
The EU AI Act's general application date is 2 August 2026: Annex III Point 5 systems deployed for EU natural persons must be compliant by that date. For high-risk systems already placed on the market or put into service before 2 August 2026, the Art.111(2) transitional regime applies: such legacy systems come into scope only when their design is significantly changed, although systems intended for use by public authorities must comply by 2 August 2030 in any event. New deployments and significant modifications require compliance from the application date.
| Obligation | Deadline | Priority Systems |
|---|---|---|
| Art.71 EU AI Act database registration | 2 August 2026 | Schufa, Experian EU, Klarna, Allianz, Jobcenter AI |
| Art.9 risk management system documented | 2 August 2026 | All Annex III Point 5 systems |
| Art.10 training data bias audit (protected characteristics + gender proxy for insurance) | 2 August 2026 | All systems — insurance requires Test-Achats proxy testing |
| Art.13 transparency notification to consumers | 2 August 2026 | Credit decisions, insurance pricing, benefits determinations |
| Art.14 human oversight protocol | 2 August 2026 | All adverse decisions to natural persons |
| Art.26 deployer context-specific risk assessment | 2 August 2026 | Every bank/insurer/public authority deploying third-party AI |
| GDPR Art.22 compliance (where applicable) | Already required | Credit scoring, automated benefit decisions |
| BaFin MaRisk alignment (German banks) | Ongoing regulatory expectation | German credit scoring deployments |
25-Item EU AI Act Annex III Point 5 Compliance Checklist
Credit Scoring Systems (Point 5b):
- Confirm whether your system evaluates creditworthiness of natural persons — if yes, Annex III Point 5(b) applies
- Apply the fraud-detection exception test: is the system's primary purpose fraud detection (not creditworthiness)? If primary purpose is creditworthiness with a fraud detection component, the exception does not apply
- Register the system in the EU AI Act database (Art.71) before deployment
- Prepare Art.11 technical documentation including model architecture, training data sources, accuracy metrics across demographic groups
- Conduct Art.10 training data audit: test for disparate impact across protected characteristics (race, sex, disability, age, religion, national origin as indirect proxies)
- Implement Art.9 risk management system with specific credit scoring failure modes: systematic bias, protected-characteristic proxies, data staleness
- Implement Art.13 consumer-facing notification: individuals must be informed that AI is used in their credit assessment
- Establish Art.14 human oversight: meaningful human review must be available before adverse credit decisions become final
- Assess CLOUD Act exposure: if your credit scoring provider is a US entity (Experian, TransUnion, FICO), document the GDPR Art.48 conflict and consider EU sovereign alternatives
- If subject to BaFin MaRisk: map EU AI Act Art.9/10/11 documentation to MaRisk BA 7.2 model risk requirements — do not assume BaFin compliance equals AI Act compliance
- Assess GDPR Art.22 interaction: if credit decisions are solely automated with significant adverse effects, Art.22 right to explanation and human review applies independently of EU AI Act obligations
Insurance Risk AI (Point 5c):
- Confirm whether your system performs risk assessment or pricing for life or health insurance policies — if yes, Annex III Point 5(c) applies
- Conduct Test-Achats gender-proxy audit: test all input variables for correlation with gender and document that no variable serves as illegal gender proxy
- Register the system in the EU AI Act database (Art.71)
- Prepare Art.10 training data documentation including how special-category health data (if any) was governed under Art.10(5)
- Implement Art.13 policyholder notification: individuals must know AI is used in their insurance pricing
- Establish Art.14 oversight: available human review for insurance coverage refusals or significant premium increases
Public Benefits AI (Point 5a):
- If you are a public authority or contracting entity deploying AI to evaluate eligibility for public benefits, healthcare, or social services — Annex III Point 5(a) applies
- Register the system in the EU AI Act database (Art.71)
- Apply the SyRI/AMS compliance test: can individuals access and contest their AI-generated risk scores? If not, Art.13/14 are violated
- Implement Art.9 risk management with specific safeguards for economically vulnerable populations
- Establish Art.14 human oversight: adverse benefit determinations must have meaningful human review available
- Assess GDPR Art.22 interaction for automated adverse decisions
General Annex III Point 5 Requirements:
- Document the provider/deployer split: if you customise a third-party model for specific populations, assess whether you become a provider in your own right under Art.25(1)
- Establish Art.72 post-market monitoring: credit scoring AI, insurance AI, and public benefits AI must have ongoing accuracy monitoring including periodic bias re-testing as population demographics and economic conditions change
See Also — EU AI Act Annex III Series
This post is part of the EU AI Act Annex III High-Risk Categories series:
- Annex III Point 1: Biometric Identification, Categorisation, and Emotion Recognition AI
- Annex III Point 2: Critical Infrastructure AI — Water, Gas, Electricity, and Transport
- Annex III Point 3: Education and Vocational Training AI
- Annex III Point 4: Employment and Recruitment AI
- Annex III Point 5: Essential Services AI (this post)
- Annex III Point 6: Law Enforcement AI — Predictive Policing, Remote Biometric Identification, and Evidence Evaluation